A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization

Authors

  • James V. Burke
  • Adrian S. Lewis
  • Michael L. Overton
Abstract

Let f be a continuous function on ℝⁿ, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but is a function whose gradient is easily computed where it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level sets, and the sampling radius ε is fixed, we show that, with probability 1, the algorithm generates a sequence with a cluster point that is Clarke ε-stationary. Furthermore, we show that if f has a unique Clarke stationary point x̄, then the set of all cluster points generated by the algorithm converges to x̄ as ε is reduced to zero. Numerical results are presented demonstrating the robustness of the algorithm and its applicability in a wide variety of contexts, including cases where f is not locally Lipschitz at minimizers. We report approximate local minimizers for functions in the applications literature which have not, to our knowledge, been obtained previously. When the termination criteria of the algorithm are satisfied, a precise statement about nearness to Clarke ε-stationarity is available. A MATLAB implementation of the algorithm is posted at http://www.cs.nyu.edu/overton/papers/gradsamp/alg.
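For readers who want the mechanics at a glance, the following is a minimal Python sketch of the gradient sampling idea the abstract describes: sample gradients at randomly perturbed points within the sampling radius ε, take the minimum-norm element of the convex hull of those gradients as a search direction, and stop when that element is small. All names here (gradient_sampling, min_norm_in_hull, the parameters eps, n_samples, tol) are illustrative assumptions; this is not the authors' MATLAB implementation, which handles sampling-radius reduction and the line search far more carefully.

    import numpy as np
    from scipy.optimize import minimize

    def min_norm_in_hull(G):
        # Minimum-norm element of the convex hull of the rows of G:
        # minimize ||G^T w||^2 over the simplex {w >= 0, sum(w) = 1}.
        m = G.shape[0]
        Q = G @ G.T
        res = minimize(lambda w: w @ Q @ w, np.full(m, 1.0 / m),
                       method="SLSQP", bounds=[(0.0, 1.0)] * m,
                       constraints=[{"type": "eq",
                                     "fun": lambda w: w.sum() - 1.0}])
        return res.x @ G

    def gradient_sampling(f, grad_f, x0, eps=0.1, n_samples=None,
                          tol=1e-6, max_iter=200, seed=None):
        # Fixed-radius gradient sampling. Because f is differentiable on an
        # open dense (full-measure) set, the sampled points land where
        # grad_f is defined with probability 1.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        n = x.size
        m = n_samples if n_samples is not None else 2 * n
        for _ in range(max_iter):
            # m points drawn uniformly from the ball of radius eps around x
            u = rng.normal(size=(m, n))
            u /= np.linalg.norm(u, axis=1, keepdims=True)
            pts = x + eps * rng.uniform(size=(m, 1)) ** (1.0 / n) * u
            G = np.vstack([grad_f(x)] + [grad_f(p) for p in pts])
            g = min_norm_in_hull(G)
            if np.linalg.norm(g) <= tol:
                break  # approximate Clarke eps-stationarity certificate
            d = -g / np.linalg.norm(g)
            # simple backtracking (Armijo) line search along d
            t, fx = eps, f(x)
            while f(x + t * d) > fx - 1e-4 * t * np.linalg.norm(g) and t > 1e-12:
                t *= 0.5
            x = x + t * d
        return x

    # Example on a simple nonsmooth function, f(x) = |x1| + 2|x2|, whose
    # minimizer (the origin) is a point of nondifferentiability.
    f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
    g = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
    print(gradient_sampling(f, g, x0=[3.0, -2.0], eps=0.05, seed=0))

Rerunning the sketch with a smaller eps and tol mirrors, very loosely, the ε-reduction regime analyzed in the paper, driving the iterates toward a Clarke stationary point.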


Similar articles

Benson's algorithm for nonconvex multiobjective problems via nonsmooth Wolfe duality

In this paper, we propose an algorithm to obtain an approximation set of the (weakly) nondominated points of nonsmooth multiobjective optimization problems with equality and inequality constraints. We use an extension of the Wolfe duality to construct the separating hyperplane in Benson's outer algorithm for multiobjective programming problems with subdifferentiable functions. We also fo...


A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems with objective and constraint functions that may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for situations when the objective and constraint functions are locally Lipschitz and continuously differentiable o...


Analysis of a Belgian Chocolate Stabilization Problem

We give a detailed numerical and theoretical analysis of a stabilization problem posed by V. Blondel in 1994. Our approach illustrates the effectiveness of a new “gradient sampling” algorithm for finding local optimizers of nonsmooth, nonconvex optimization problems arising in control, as well as the power of nonsmooth analysis for understanding variational problems involving polynomial roots a...


Minimizing Nonconvex Population Risk from Rough Empirical Risk

Population risk—the expectation of the loss over the sampling mechanism—is always of primary interest in machine learning. However, learning algorithms only have access to empirical risk, which is the average loss over training examples. Although the two risks are typically guaranteed to be pointwise close, for applications with nonconvex nonsmooth losses (such as modern deep networks), the eff...


A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning

In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorith...
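The blurb above is truncated, but the trim-while-fitting idea it describes can be illustrated generically: alternate between fitting a model on the currently trusted samples and re-selecting the samples the model fits best. The least-squares sketch below is a hypothetical illustration of that trimming loop, not the SMART stochastic proximal-gradient algorithm the paper introduces; keep and n_rounds are made-up parameters.

    import numpy as np

    def trimmed_least_squares(X, y, keep=0.8, n_rounds=20):
        # Alternate between (2) fitting on the trusted subset and
        # (1) re-selecting the keep-fraction of samples with the smallest
        # residuals. Illustrative only, not the SMART method itself.
        n = len(y)
        k = int(keep * n)
        idx = np.arange(n)           # start by trusting every sample
        for _ in range(n_rounds):
            w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            resid = (X @ w - y) ** 2
            new_idx = np.sort(np.argsort(resid)[:k])
            if np.array_equal(new_idx, np.sort(idx)):
                break                # trusted set has stabilized
            idx = new_idx
        return w, idx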


A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning

Machine learning theory typically assumes that training data is unbiased and not adversarially generated. When real training data deviates from these assumptions, trained models make erroneous predictions, sometimes with disastrous effects. Robust losses, such as the Huber norm, were designed to mitigate the effects of such contaminated data, but they are limited to the regression context. In t...



Journal:
  • SIAM Journal on Optimization

Volume 15, Issue -

Pages -

Publication year: 2005